Contrastive Self-supervised Representation Learning Using Synthetic Data
Authors
Abstract
Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning, which leverages the input itself as supervision, is increasingly preferred for its strong performance in visual representation learning. This paper introduces a generalizable contrastive framework built on synthetic data, which can be obtained easily and with complete controllability. Specifically, we propose to optimize a contrastive task and a physical property prediction task simultaneously. Given a scene, the first task aims to maximize agreement between a pair of images generated by our proposed view sampling module, while the second aims to predict three maps, i.e., the depth, instance contour, and surface normal maps. In addition, a feature-level domain adaptation technique based on adversarial training is applied to reduce the difference between synthetic and realistic data. Experiments demonstrate that our method achieves state-of-the-art performance on several recognition datasets.
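The agreement-maximization objective described above is the standard contrastive setup. As a minimal sketch of how such a loss is typically computed, the snippet below implements an NT-Xent-style contrastive loss (as popularized by SimCLR) over a batch of embedding pairs; the paper's exact loss, view sampling module, and temperature value are not given here, so the function name, signature, and hyperparameter are assumptions for illustration only.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss: pulls each positive pair
    (z1[i], z2[i]) together and pushes apart all other embeddings
    in the batch. z1, z2: arrays of shape (N, d)."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N) scaled similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for index i is i + n, and for index i + n it is i.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Log-softmax over each row, then pick the positive's log-probability.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Usage with random embeddings standing in for two augmented views of a scene:
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
z2 = rng.normal(size=(4, 8))
print(nt_xent_loss(z1, z2))  # a finite positive scalar
```

In a full pipeline this loss would be one term of a multi-task objective alongside the depth, instance contour, and surface normal prediction losses and the adversarial domain adaptation term.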
Similar resources
Time-Contrastive Networks: Self-Supervised Learning from Video
We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses. Imitation of human behavior requires a viewpoint-invariant representation that ca...
Semi-supervised Data Representation via Affinity Graph Learning
We consider the general problem of utilizing both labeled and unlabeled data to improve data representation performance. A new semi-supervised learning framework is proposed by combing manifold regularization and data representation methods such as Non negative matrix factorization and sparse coding. We adopt unsupervised data representation methods as the learning machines because they do not ...
Weakly-Supervised Learning with Cost-Augmented Contrastive Estimation
We generalize contrastive estimation in two ways that permit adding more knowledge to unsupervised learning. The first allows the modeler to specify not only the set of corrupted inputs for each observation, but also how bad each one is. The second allows specifying structural preferences on the latent variable used to explain the observations. They require setting additional hyperparameters, w...
Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery
In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multitask learning requires annotations for multiple properties of the same training instance, we look to synthet...
Contrastive Learning Using Spectral Methods
In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another. For example, given a background corpus of news articles together with writings of a particular author, one may want a topic model that explains word patterns and themes specific to the author. Another example come...
Journal
Journal title: International Journal of Automation and Computing
Year: 2021
ISSN: 1751-8520, 1476-8186
DOI: https://doi.org/10.1007/s11633-021-1297-9